Dialogue models are able to generate coherent and fluent responses, but they can still be challenging to control and may produce non-engaging, unsafe results. This unpredictability diminishes user trust and can hinder the use of the models in the real world. To address this, we introduce DialGuide, a novel framework for controlling dialogue model behavior using natural language rules, or guidelines. These guidelines provide information about the context they are applicable to and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent. We evaluate DialGuide on three tasks in open-domain dialogue response generation: guideline selection, response generation, and response entailment verification. Our dataset contains 10,737 positive and 15,467 negative dialogue context-response-guideline triplets across two domains - chit-chat and safety. We provide baseline models for the tasks and benchmark their performance. We also demonstrate that DialGuide is effective in the dialogue safety domain, producing safe and engaging responses that follow developer guidelines.
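A rough illustration of the kind of dialogue context-response-guideline triplet described above (the field names and example text here are hypothetical, not the released DialGuide schema):

```python
# Hypothetical illustration of a context-response-guideline triplet; the actual
# DialGuide data format and field names may differ.
triplet = {
    "dialogue_context": [
        "User: I've been feeling really stressed about work lately."
    ],
    "guideline": {
        # When the guideline applies ...
        "condition": "If the user expresses stress or frustration,",
        # ... what the response should contain.
        "action": "acknowledge their feelings and suggest a small, concrete coping step.",
    },
    # A response that follows (entails) the guideline.
    "response": "That sounds tough. Maybe a short walk between meetings could help you reset.",
    "label": "positive",  # positive = the response follows the guideline
}
print(triplet["guideline"]["condition"], triplet["guideline"]["action"])
```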
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a prefix to the encoder. This not only limits the effectiveness of multi-task learning, but also hinders the model's ability to generalize to new domains or tasks that were not seen during training, which is crucial for real-world applications. In this paper, we propose compositional task configurations, a set of prompts prepended to the encoder to improve cross-task generalization of unified models. We design the task configurations to explicitly specify the task type, as well as its input and output types. We show that this not only allows the model to better learn shared knowledge across different tasks at training time, but also allows us to control the model by composing new configurations that apply novel input-output combinations in a zero-shot manner. We demonstrate via experiments over ten table-to-text tasks that our method outperforms the UnifiedSKG baseline by noticeable margins in both in-domain and zero-shot settings, with average improvements of +0.5 and +12.6, respectively, using a T5-large backbone.
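A sketch of how such a compositional task configuration might be composed and prepended to the encoder input (the tag format and example strings below are illustrative assumptions, not the exact prompts used in the paper):

```python
# Illustrative only: compose a task configuration from task type, input type,
# and output type, and prepend it to a linearized table input.
def build_config(task_type: str, input_type: str, output_type: str) -> str:
    return f"<task: {task_type}> <input: {input_type}> <output: {output_type}>"

table = "col: name | capacity row 1: Wembley | 90000"

# A combination seen at training time, e.g. table question answering.
seen = build_config("question answering", "table + question", "answer")
encoder_input = f"{seen} question: What is the capacity? {table}"

# Zero-shot composition: reuse the same primitives in a novel combination.
novel = build_config("summarization", "table", "text")
zero_shot_input = f"{novel} {table}"

print(encoder_input)
print(zero_shot_input)
```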
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
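For readers unfamiliar with the practices reported above, a minimal sketch of k-fold cross-validation combined with ensembling of the resulting fold models (the dataset and model here are placeholders, not anything used by the survey respondents):

```python
# Minimal sketch of two practices reported in the survey: k-fold cross-validation
# on the training set and ensembling the resulting identically configured models.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)
fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    fold_models.append(model)

# Ensemble by averaging predicted probabilities across the five fold models.
probs = np.mean([m.predict_proba(X) for m in fold_models], axis=0)
ensemble_pred = probs.argmax(axis=1)
print("ensemble accuracy:", (ensemble_pred == y).mean())
```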
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
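Since the models are openly released, a minimal sketch of loading one of the smaller BLOOM checkpoints through the Hugging Face `transformers` library (the `bigscience/bloom-560m` checkpoint and generation settings below are assumptions for illustration; this is not code from the BLOOM paper itself):

```python
# Sketch: generate text with a small openly released BLOOM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```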
Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, heterogeneous, and have complicated dependency structures, making analyses using conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we give a comprehensive survey of deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning including the most popular deep architectures. We present an overview of the single-cell analytic pipeline pursued in research applications while noting divergences due to data sources or specific applications. We then review seven popular tasks spanning different stages of the single-cell analysis pipeline, including multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. Under each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss future directions and the most recent challenges. This survey will serve as a reference for biologists and computer scientists, encouraging collaborations.
The COVID-19 pandemic has dramatically catalyzed the proliferation of e-shopping. The dramatic growth of e-shopping will undoubtedly have a significant impact on travel demand. As a result, transportation modelers' ability to model e-shopping demand is becoming increasingly important. This study develops models that predict household weekly home delivery frequencies. We use both classical econometric and machine learning techniques to obtain the best model. It is found that socioeconomic factors such as having an online grocery membership, the average age of household members, the percentage of male household members, and the number of workers in the household, as well as various land-use factors, influence home delivery demand. This study also compares the interpretation and performance of the machine learning models against the classical econometric models. Agreement is found in the variable effects identified by the machine learning and econometric models. However, with similar recall accuracy, the ordered probit model, a classical econometric model, accurately predicts the overall distribution of household delivery demand. In contrast, both machine learning models fail to match the observed distribution.
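A minimal sketch of an ordered probit specification for weekly delivery frequency, using the `statsmodels` OrderedModel on synthetic data (the covariates, cutoffs, and data below are invented for illustration; the study's actual survey data and specification differ):

```python
# Sketch of an ordered probit model for weekly home-delivery frequency (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "grocery_membership": rng.integers(0, 2, n),   # has an online grocery membership
    "avg_age": rng.normal(40, 12, n),              # average age of household members
    "num_workers": rng.integers(0, 4, n),          # workers in the household
})

# Ordinal outcome: 0, 1, 2, or 3+ deliveries per week (synthetic latent-variable draw).
latent = 0.8 * X["grocery_membership"] - 0.02 * X["avg_age"] + 0.3 * X["num_workers"]
y = pd.cut(latent + rng.normal(size=n),
           bins=[-np.inf, -0.5, 0.5, 1.5, np.inf],
           labels=["0", "1", "2", "3+"])   # pd.cut yields an ordered categorical

model = OrderedModel(y, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())

# Average predicted probabilities over the four frequency categories give the
# aggregate delivery-frequency distribution the abstract refers to.
print(result.predict(X).mean(axis=0))
```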
The transition to adulthood is an important life stage for many families. Prior research has shown that young adults with intellectual or developmental disabilities (IDD) face more challenges than their peers. This study explores how natural language processing (NLP) methods, especially unsupervised machine learning, can be used to help psychologists analyze emotions and sentiment, and how topic modeling can identify common issues and challenges faced by young adults with IDD and their families. In addition, the results are compared with those obtained from young adults without IDD. The findings show that NLP methods can be very useful for psychologists to analyze emotions, conduct cross-case analysis, and summarize key topics from conversational data. Our Python code is available at https://github.com/mlaricheva/emotion_topic_modeling.
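A minimal sketch of the unsupervised topic-modeling step on conversational text, here using LDA via scikit-learn (the example snippets are invented, and the linked repository may use a different library or preprocessing pipeline):

```python
# Sketch of unsupervised topic modeling over short conversational snippets with LDA.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "worried about finding a job after graduation",
    "applying for disability benefits and the paperwork is confusing",
    "excited to move into my own apartment next year",
    "hard to make new friends outside of school",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per topic to summarize recurring themes across cases.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```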
Unsupervised pre-training methods for large vision models have been shown to improve performance on downstream supervised tasks. Developing similar techniques for satellite imagery presents significant opportunities, as unlabeled data is plentiful and the inherent temporal and multi-spectral structure provides avenues to further improve existing pre-training strategies. In this paper, we present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoders (MAE). To leverage temporal information, we include a temporal embedding along with independently masking image patches across time. In addition, we demonstrate that encoding multi-spectral data as groups of bands with distinct spectral positional encodings is beneficial. Our approach yields strong improvements over previous state-of-the-art techniques, both in supervised learning performance on benchmark datasets (up to ↑7%), and in transfer learning performance on downstream remote sensing tasks, including land cover classification (up to ↑14%) and semantic segmentation.
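A rough sketch of the independent-across-time patch masking and additive temporal embedding described above (PyTorch; the shapes, masking ratio, and embedding scheme are assumptions for illustration, not the released SatMAE code):

```python
# Sketch: mask image patches independently per timestamp and add a temporal embedding.
import torch

B, T, N, D = 2, 3, 196, 768          # batch, timestamps, patches per image, embed dim
mask_ratio = 0.75
patch_tokens = torch.randn(B, T, N, D)   # patch embeddings for each timestamp
temporal_embed = torch.randn(T, D)       # one embedding per timestamp (learned in practice)

tokens = patch_tokens + temporal_embed[None, :, None, :]   # broadcast over batch and patches

# Independent random masking for every (sample, timestamp) pair.
keep = int(N * (1 - mask_ratio))
noise = torch.rand(B, T, N)
keep_idx = noise.argsort(dim=-1)[..., :keep]               # indices of patches to keep
visible = torch.gather(tokens, 2, keep_idx.unsqueeze(-1).expand(-1, -1, -1, D))
print(visible.shape)   # (B, T, keep, D): visible tokens fed to the MAE encoder
```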
Although more layers and more parameters generally improve model accuracy, such large models typically have high computational complexity and require large memory, which exceeds the capacity of small devices for inference and leads to long training times. Moreover, the long training times and inference times of large models are hard to afford even on high-performance servers. As an efficient way to compress a large deep model (teacher model) into a compact model (student model), knowledge distillation is a promising approach for dealing with large models. However, existing knowledge distillation methods cannot exploit elastically available computing resources and therefore suffer from low efficiency. In this paper, we propose an elastic deep learning framework for knowledge distillation, EDL-Dist. The advantages of EDL-Dist are three-fold. First, the inference and training processes are separated. Second, elastically available computing resources can be exploited to improve efficiency. Third, fault tolerance of the training and inference processes is supported. We conduct extensive experiments showing that the throughput of EDL-Dist is up to 3.125 times faster than the baseline method (online knowledge distillation) while achieving similar or higher accuracy.
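For context, a minimal sketch of the standard temperature-scaled soft-label distillation loss underlying teacher-to-student compression (this is the generic technique, not the EDL-Dist system itself, whose contribution is the elastic, fault-tolerant infrastructure around such training; the temperature and weighting values are illustrative):

```python
# Sketch of a standard temperature-scaled knowledge distillation loss in PyTorch.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label cross-entropy on the ground truth.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)        # in EDL-Dist, produced by a separate inference process
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```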